
    Manipulating Attributes of Natural Scenes via Hallucination

    In this study, we explore building a two-stage framework that enables users to directly manipulate high-level attributes of a natural scene. The key to our approach is a deep generative network that can hallucinate images of a scene as if they were taken in a different season (e.g. during winter), weather condition (e.g. on a cloudy day) or time of day (e.g. at sunset). Once the scene is hallucinated with the given attributes, the corresponding look is transferred to the input image while keeping the semantic details intact, giving a photo-realistic manipulation result. Because the proposed framework hallucinates what the scene will look like, it does not require a reference style image, as commonly utilized in most appearance or style transfer approaches. Moreover, it can simultaneously manipulate a given scene according to a diverse set of transient attributes within a single model, eliminating the need to train a separate network for each translation task. Our comprehensive set of qualitative and quantitative results demonstrates the effectiveness of our approach against competing methods.
    Comment: Accepted for publication in ACM Transactions on Graphics

    Detecting Euphemisms with Literal Descriptions and Visual Imagery

    This paper describes our two-stage system for the Euphemism Detection shared task hosted by the 3rd Workshop on Figurative Language Processing, held in conjunction with EMNLP 2022. Euphemisms tone down expressions about sensitive or unpleasant issues such as addiction and death. The ambiguous nature of euphemistic words or expressions makes it challenging to detect their actual meaning within a context. In the first stage, we seek to mitigate this ambiguity by incorporating literal descriptions into the input text prompts of our baseline model. It turns out that this kind of direct supervision yields a remarkable performance improvement. In the second stage, we integrate visual supervision into our system using visual imagery: two sets of images generated by a text-to-image model, taking the terms and their descriptions as input. Our experiments demonstrate that visual supervision also gives a statistically significant performance boost. Our system achieved second place with an F1 score of 87.2%, only about 0.9% below the best submission.
    Comment: 7 pages, 1 table, 1 figure. Accepted to the 3rd Workshop on Figurative Language Processing at EMNLP 2022. https://github.com/ilkerkesen/euphemis
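    The first-stage idea described in the abstract, prepending a literal description of a potentially euphemistic term to the classifier input, can be sketched as below. The dictionary entries and prompt template here are hypothetical illustrations, not the authors' actual resources.

```python
# Hypothetical mapping from euphemistic terms to literal descriptions.
# The paper's system draws descriptions from task resources; these two
# entries are invented for illustration only.
LITERAL_DESCRIPTIONS = {
    "passed away": "died",
    "let go": "dismissed from a job",
}

def build_prompt(sentence: str, term: str) -> str:
    """Prepend the term's literal description to the classifier input,
    so the model sees the literal meaning alongside the context."""
    description = LITERAL_DESCRIPTIONS.get(term, term)
    return f"{term} means {description}. {sentence}"

print(build_prompt("He was let go last week.", "let go"))
# → let go means dismissed from a job. He was let go last week.
```

    A second-stage system would additionally pair each term and description with images from a text-to-image model; the prompt construction above covers only the textual supervision.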

    MMIC VCO design

    Ankara : Department of Electrical and Electronics Engineering and Institute of Engineering and Sciences, Bilkent Univ., 1995.
    Thesis (Master's) -- Bilkent University, 1995.
    Includes bibliographical references (leaves 99-101).
    In this study, three voltage controlled oscillator (VCO) circuits are realised using Monolithic Microwave Integrated Circuit (MMIC) technology. Two of the VCOs use the capacitive feedback topology, whereas the last one is designed using the inductive feedback topology. GaAs MESFETs are used both as active devices and as varactor diodes. Designed for a 50Ω system, the circuits operate in the 8.88-10.40GHz, 8.71-10.23GHz and 8.96-12.14GHz ranges. Their output powers are well above 9.5dBm for most of the oscillation band. All three VCOs have harmonic suppression better than 30dBc. Both small-signal and large-signal analyses are carried out. The layouts are designed to GEC Marconi's F20 process rules and the circuits are produced in this foundry.
    Erdem, Aykut
    M.S.
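    The capacitive feedback topology mentioned in the abstract is the Colpitts configuration. As a general illustration (not specific to this thesis's circuit values), the oscillation frequency of an ideal Colpitts tank with inductance $L$ and feedback capacitors $C_1$ and $C_2$ is set by $L$ resonating with the series combination of the capacitors:

```latex
f_0 = \frac{1}{2\pi \sqrt{L \, \dfrac{C_1 C_2}{C_1 + C_2}}}
```

    In a VCO, one or both capacitances are realised as varactors (here, GaAs MESFET junctions), so the tuning voltage shifts $f_0$ across the band.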